
    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human visual system.
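
    A minimal PyTorch sketch of the three-branch composition described in this abstract is shown below. The module names (Focus-Net, EDoF-Net, Stereo-Net, BDfF-Net) follow the abstract; the layer sizes, channel counts, stack size, and the final fusion step are illustrative assumptions, not the authors' implementation.

        import torch
        import torch.nn as nn

        class FocusNet(nn.Module):
            """Estimates depth from a single focal stack of S focus slices."""
            def __init__(self, stack_size=8):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(stack_size * 3, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 1, 3, padding=1),
                )

            def forward(self, stack):              # stack: (B, S*3, H, W)
                return self.body(stack)            # (B, 1, H, W) depth from focus cues

        class EDoFNet(nn.Module):
            """Fuses a focal stack into a single all-in-focus (EDoF) image."""
            def __init__(self, stack_size=8):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(stack_size * 3, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 3, 3, padding=1),
                )

            def forward(self, stack):
                return self.body(stack)            # (B, 3, H, W) EDoF image

        class StereoNet(nn.Module):
            """Matches left/right EDoF images to produce a disparity-based depth map."""
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 1, 3, padding=1),
                )

            def forward(self, left, right):
                return self.body(torch.cat([left, right], dim=1))

        class BDfFNet(nn.Module):
            """Combines focus-based and stereo-based depth estimates into one map."""
            def __init__(self, stack_size=8):
                super().__init__()
                self.focus_net = FocusNet(stack_size)
                self.edof_net = EDoFNet(stack_size)
                self.stereo_net = StereoNet()
                self.fuse = nn.Conv2d(2, 1, 3, padding=1)

            def forward(self, left_stack, right_stack):
                depth_focus = self.focus_net(left_stack)
                edof_left = self.edof_net(left_stack)
                edof_right = self.edof_net(right_stack)
                depth_stereo = self.stereo_net(edof_left, edof_right)
                return self.fuse(torch.cat([depth_focus, depth_stereo], dim=1))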

    Deep Burst Denoising

    Noise is an inherent issue of low-light image capture, one which is exacerbated on mobile devices due to their narrow apertures and small sensors. One strategy for mitigating noise in a low-light situation is to increase the shutter time of the camera, thus allowing each photosite to integrate more light and decrease noise variance. However, there are two downsides of long exposures: (a) bright regions can exceed the sensor range, and (b) camera and scene motion will result in blurred images. Another way of gathering more light is to capture multiple short (thus noisy) frames in a "burst" and intelligently integrate the content, thus avoiding the above downsides. In this paper, we use the burst-capture strategy and implement the intelligent integration via a recurrent fully convolutional deep neural net (CNN). We build our novel, multiframe architecture to be a simple addition to any single-frame denoising model, and design it to handle an arbitrary number of noisy input frames. We show that it achieves state-of-the-art denoising results on our burst dataset, improving on the best published multi-frame techniques, such as VBM4D and FlexISP. Finally, we explore other applications of image enhancement by integrating content from multiple frames and demonstrate that our DNN architecture generalizes well to image super-resolution.
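
    The recurrent extension of a single-frame denoiser described in this abstract can be sketched as follows. This is a hedged illustration, not the paper's architecture: the single-frame model, the convolutional recurrent state update, and all layer sizes are assumptions chosen only to show how an arbitrary-length burst can be folded through shared weights one frame at a time.

        import torch
        import torch.nn as nn

        class SingleFrameDenoiser(nn.Module):
            """A small fully convolutional denoiser operating on one frame."""
            def __init__(self, channels=64):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                )
                self.out = nn.Conv2d(channels, 3, 3, padding=1)

            def forward(self, frame):
                feat = self.features(frame)
                return self.out(feat), feat        # denoised frame and its features

        class RecurrentBurstDenoiser(nn.Module):
            """Wraps the single-frame model with a convolutional recurrent state."""
            def __init__(self, channels=64):
                super().__init__()
                self.channels = channels
                self.single = SingleFrameDenoiser(channels)
                self.recurrent = nn.Conv2d(2 * channels, channels, 3, padding=1)
                self.out = nn.Conv2d(channels, 3, 3, padding=1)

            def forward(self, burst):              # burst: (B, T, 3, H, W), any T
                b, t, _, h, w = burst.shape
                state = burst.new_zeros(b, self.channels, h, w)
                result = None
                for i in range(t):                 # frames are folded in sequentially
                    _, feat = self.single(burst[:, i])
                    state = torch.relu(self.recurrent(torch.cat([feat, state], dim=1)))
                    result = self.out(state)       # estimate after i + 1 frames
                return result                      # final multi-frame denoised image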

    GIA-Net: Global Information Aware Network for Low-light Imaging

    It is extremely challenging to acquire perceptually plausible images under low-light conditions due to low SNR. Most recently, U-Nets have shown promising results for low-light imaging. However, vanilla U-Nets generate images with artifacts such as color inconsistency due to the lack of global color information. In this paper, we propose a global information aware (GIA) module, which is capable of extracting and integrating the global information into the network to improve the performance of low-light imaging. The GIA module can be inserted into a vanilla U-Net with negligible extra learnable parameters or computational cost. Moreover, a GIA-Net is constructed, trained and evaluated on a large-scale real-world low-light imaging dataset. Experimental results show that the proposed GIA-Net outperforms the state-of-the-art methods in terms of four metrics, including deep metrics that measure perceptual similarities. Extensive ablation studies have been conducted to verify the effectiveness of the proposed GIA-Net for low-light imaging by utilizing global information. Comment: 16 pages, 6 figures; accepted to AIM at ECCV 2020.
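
    The description of the GIA module suggests pooling a global descriptor from a feature map and feeding it back to correct local decisions such as overall color. A minimal sketch of that idea is below; it is essentially a squeeze-and-excitation style channel gate, used here only to illustrate how global information can be injected into a U-Net stage with negligible extra parameters. The reduction ratio and gating design are assumptions, not the paper's definition.

        import torch.nn as nn

        class GIAModule(nn.Module):
            """Pools global statistics and re-injects them as a per-channel gate."""
            def __init__(self, channels, reduction=4):
                super().__init__()
                self.pool = nn.AdaptiveAvgPool2d(1)           # global spatial average
                self.mlp = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid(),
                )

            def forward(self, x):                             # x: (B, C, H, W)
                b, c, _, _ = x.shape
                g = self.pool(x).view(b, c)                   # (B, C) global descriptor
                gate = self.mlp(g).view(b, c, 1, 1)           # per-channel weights
                return x * gate                               # modulate local features

    A block like this can be dropped in after any convolutional stage of a vanilla U-Net, which is consistent with the abstract's claim of negligible added parameters and computation.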

    Burst Image Deblurring Using Permutation Invariant Convolutional Neural Networks

    We propose a neural approach for fusing an arbitrary-length burst of photographs suffering from severe camera shake and noise into a sharp and noise-free image. Our novel convolutional architecture has a simultaneous view of all frames in the burst, and by construction treats them in an order-independent manner. This enables it to effectively detect and leverage subtle cues scattered across different frames, while ensuring that each frame gets full and equal consideration regardless of its position in the sequence. We train the network with richly varied synthetic data consisting of camera shake, realistic noise, and other common imaging defects. The method demonstrates consistent state-of-the-art burst image restoration performance for highly degraded sequences of real-world images, and extracts accurate detail that is not discernible from any of the individual frames in isolation.
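
    One way to realize the order-independent treatment of frames described in this abstract is a shared per-frame encoder followed by a symmetric reduction over the burst dimension. The sketch below uses max-pooling across frames as that reduction; the encoder/decoder layers and the choice of max (rather than, say, mean) pooling are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class PermutationInvariantFusion(nn.Module):
            """Encodes each frame with shared weights, then fuses order-independently."""
            def __init__(self, channels=64):
                super().__init__()
                self.encoder = nn.Sequential(      # identical weights for every frame
                    nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                )
                self.decoder = nn.Sequential(
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(channels, 3, 3, padding=1),
                )

            def forward(self, burst):              # burst: (B, T, 3, H, W), any T
                b, t, c, h, w = burst.shape
                feats = self.encoder(burst.reshape(b * t, c, h, w))
                feats = feats.view(b, t, -1, h, w)
                fused, _ = feats.max(dim=1)        # symmetric in frame order
                return self.decoder(fused)         # single sharp, noise-free estimate

    Because the max over the burst dimension is invariant to the order of its inputs, permuting the frames cannot change the output, which matches the order-independence property claimed above.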

    ‘The things you didn’t do’: Gender, slut-shaming, and the need to address sexual harassment in narrative resources addressing sexting and cyberbullying

    This chapter reports on research examining young people’s understandings of gender roles in everyday digital cultures and communication technologies, and in relation to sexting practices. A cyber-safety narrative film that addresses sexting, cyberbullying, and digital citizenship was used as a springboard for focus group discussions with 24 young people in Victoria, Australia. The chapter outlines the key findings on how young people understood and explained common gender dynamics in relation to bullying, cyberbullying, and sexting, reflecting in these discussions on both the gender relations depicted in commonly used cyber-safety narrative resources and those in their own social lives. The chapter describes a discussion that arose among female participants around the ‘slut’ label, concerns about the possibility of sexual rumours being spread via digital social networks, and associated on- and offline harassment over sexual things they had not actually done. This discussion, it is argued, illustrates the way girls feel responsible for protecting themselves from the potential psychic injuries of the slut label through strict sexual self-regulation, knowing that they cannot control the malevolent and frequent use of this label by peers on- and offline. Future narrative resources that seek to address sexting and cyberbullying need to more clearly identify and respond to sexual harassment and sexism as persistent features of young people’s digital and school cultures.